
    Feature extraction from ear-worn sensor data for gait analysis

    Gait analysis plays a significant role in assessing human walking patterns. It is widely used in sports science to understand body mechanics, and it is also used to monitor gait abnormalities related to neurological disorders in patients. Traditional marker-based systems are well established for tracking gait parameters; however, they require long setup times and are therefore difficult to apply to everyday real-time monitoring. There is now ever-growing interest in developing portable devices, together with supporting software and novel algorithms, for gait pattern analysis. The aim of this research is to investigate novel gait pattern detection algorithms for accelerometer-based sensors. In particular, we used the e-AR sensor, an ear-worn sensor that registers body motion via its embedded 3-D accelerometer. Gait data were given semantic annotation using a pressure mat as well as real-time video recording. Important time stamps within the gait cycle, which are essential for extracting meaningful gait parameters, were identified. Furthermore, an advanced signal processing algorithm was applied to perform automatic feature extraction through signal decomposition and reconstruction. Analysis of real-world data has demonstrated the potential of an accelerometer-based sensor system and its ability to extract meaningful gait parameters.
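
    As a rough illustration of the kind of event detection involved, the sketch below picks out candidate heel-strike peaks from a vertical acceleration trace using SciPy filtering and peak picking. The function name, cutoff frequency, and thresholds are illustrative assumptions; this simple peak-based stand-in is not the decomposition-and-reconstruction algorithm used in the paper.

```python
# Illustrative sketch only: detecting candidate gait events (e.g., heel
# strikes) from a vertical acceleration trace by smoothing and peak picking.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_gait_events(acc_vertical, fs=100.0):
    """Return sample indices of candidate heel strikes.

    acc_vertical : 1-D array of vertical acceleration (m/s^2)
    fs           : sampling rate in Hz (assumed value)
    """
    # Low-pass filter to remove high-frequency noise; a 3 Hz cutoff is
    # typical for walking but is an assumption here.
    b, a = butter(4, 3.0 / (fs / 2.0), btype="low")
    smooth = filtfilt(b, a, acc_vertical)

    # Heel strikes show up as prominent peaks; enforce a minimum spacing of
    # 0.4 s so two detected peaks cannot fall inside one step.
    peaks, _ = find_peaks(smooth, distance=int(0.4 * fs), prominence=0.5)
    return peaks

# Stride time can then be read off as the interval between successive
# events attributed to the same side.
```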

    Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition

    The success of most advanced facial expression recognition work relies heavily on large-scale annotated datasets. However, acquiring clean and consistent annotations for facial expression datasets poses great challenges. On the other hand, self-supervised contrastive learning has gained great popularity due to its simple yet effective instance-discrimination training strategy, which can potentially circumvent the annotation issue. Nevertheless, instance-level discrimination has inherent disadvantages, which become even more challenging when faced with complicated facial representations. In this paper, we revisit the use of self-supervised contrastive learning and explore three core strategies to enforce expression-specific representations and to minimize interference from other facial attributes, such as identity and face styling. Experimental results show that our proposed method outperforms the current state-of-the-art self-supervised learning methods on both categorical and dimensional facial expression recognition tasks. Comment: Accepted to BMVC 202
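
    For context, the instance-discrimination objective referred to above is typically the NT-Xent (InfoNCE) loss popularized by SimCLR. The sketch below shows that generic baseline objective only; it is a hedged illustration, not the expression-specific strategies proposed in the paper.

```python
# Minimal NT-Xent (instance-discrimination) loss as used in SimCLR-style
# self-supervised contrastive learning.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)        # (2N, D) stacked views
    sim = z @ z.t() / temperature         # scaled cosine similarities
    n = z1.size(0)
    # Mask out self-similarity so a sample is never its own positive.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is its other augmented view: i+n (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```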

    Design and Prototyping of a Bio-inspired Kinematic Sensing Suit for the Shoulder Joint: Precursor to a Multi-DoF Shoulder Exosuit

    Soft wearable robots are a promising new design paradigm for rehabilitation and active assistance applications. Their compliant nature makes them ideal for complex joints like the shoulder, but intuitive control of these robots requires robust and compliant sensing mechanisms. In this work, we introduce the sensing framework for a multi-DoF shoulder exosuit capable of sensing the kinematics of the shoulder joint. The proposed tendon-based sensing system is inspired by the concept of muscle synergies and the body's sense of proprioception, and finds its basis in the organization of the muscles responsible for shoulder movements. A motion-capture-based evaluation of the developed sensing system showed conformance to the behaviour exhibited by the muscles that inspired its routing, and validates the hypothesis that the tendon routing can be extended to the actuation framework of the exosuit in the future. The mapping from multi-sensor space to joint space is a multivariate multiple regression problem and was derived using an Artificial Neural Network (ANN). The sensing framework was tested with a motion-tracking system and achieved root mean square errors (RMSE) of approximately 5.43 degrees and 3.65 degrees for the azimuth and elevation joint angles, respectively, measured over 29,000 frames (4+ minutes) of motion-capture data. Comment: 8 pages, 7 figures, 1 table
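
    The sensor-to-joint-space mapping described above is a multivariate multiple regression, which a small MLP can approximate. The sketch below is one plausible shape for such an ANN; the layer sizes, the number of sensors, and the training details are assumptions, not the paper's exact network.

```python
# Hedged sketch of the sensor-to-joint-space regression: a small MLP mapping
# tendon-sensor readings to (azimuth, elevation) joint angles.
import torch
import torch.nn as nn

class SensorToJointANN(nn.Module):
    def __init__(self, n_sensors=4, n_joints=2, hidden=64):  # sizes assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints),   # azimuth, elevation (degrees)
        )

    def forward(self, x):
        return self.net(x)

# Multivariate multiple regression -> mean-squared-error training:
model = SensorToJointANN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# for sensors, angles in dataloader:       # motion-capture angles as labels
#     opt.zero_grad()
#     loss = loss_fn(model(sensors), angles)
#     loss.backward(); opt.step()
```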

    Micro-object pose estimation with sim-to-real transfer learning using small dataset

    Three-dimensional (3D) pose estimation of micro/nano-objects is essential for the implementation of automatic manipulation in micro/nano-robotic systems. However, out-of-plane pose estimation of a micro/nano-object is challenging, since the images are typically obtained in 2D using a scanning electron microscope (SEM) or an optical microscope (OM). Traditional deep learning based methods require the collection of a large amount of labeled data for model training to estimate the 3D pose of an object from a monocular image. Here we present a sim-to-real learning-to-match approach for 3D pose estimation of micro/nano-objects. Instead of collecting large training datasets, simulated data is generated to enlarge the limited experimental data obtained in practice, while the domain gap between the generated and experimental data is minimized via image translation based on a generative adversarial network (GAN) model. A learning-to-match approach is used to map the generated data and the experimental data to a low-dimensional space with the same data distribution for different pose labels, which ensures effective feature embedding. Combining the labeled data obtained from experiments and simulations, a new training dataset is constructed for robust pose estimation. The proposed method is validated with images from both SEM and OM, facilitating the development of closed-loop control of micro/nano-objects with complex shapes in micro/nano-robotic systems.
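
    The learning-to-match step can be pictured as pulling simulated and real embeddings that share a pose label together while pushing mismatched pairs apart. The following contrastive-style loss is a generic stand-in under that reading; the function name and margin are assumptions, not the paper's exact formulation.

```python
# Sketch of the "learning-to-match" idea: embed simulated and real images of
# the same pose label close together so both domains share one distribution.
import torch
import torch.nn.functional as F

def match_loss(emb_sim, emb_real, same_pose, margin=1.0):
    """emb_sim, emb_real: (N, D) embeddings of paired sim/real images.
    same_pose: (N,) bool tensor, True where the pair shares a pose label."""
    d = F.pairwise_distance(emb_sim, emb_real)
    pos = d.pow(2)                              # pull matching poses together
    neg = (margin - d).clamp(min=0).pow(2)      # push different poses apart
    return torch.where(same_pose, pos, neg).mean()
```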

    Nonlinearity Compensation in a Multi-DoF Shoulder Sensing Exosuit for Real-Time Teleoperation

    The compliant nature of soft wearable robots makes them ideal for complex multiple-degrees-of-freedom (DoF) joints, but it also introduces additional structural nonlinearities. Intuitive control of these wearable robots requires robust sensing to overcome these inherent nonlinearities. This paper presents a joint kinematics estimator for a bio-inspired multi-DoF shoulder exosuit that compensates for the encountered nonlinearities. To overcome the nonlinearities and hysteresis inherent to the soft and compliant nature of the suit, we developed a deep learning-based method to map the sensor data to the joint space. The experimental results show that the new learning-based framework outperforms recent state-of-the-art methods by a large margin while achieving a 12 ms inference time using only a GPU-based edge-computing device. The effectiveness of our combined exosuit and learning framework is demonstrated through real-time teleoperation with a simulated NAO humanoid robot. Comment: 8 pages, 7 figures, 3 tables. Accepted to be published in IEEE RoboSoft 202
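
    Because hysteresis makes the current joint angle depend on the history of sensor readings, a recurrent estimator is one natural way to realize the learning-based mapping described above. The sketch below is an illustrative assumption about how such an estimator could look, not the architecture reported in the paper.

```python
# Illustrative recurrent joint-kinematics estimator: hysteresis is
# history-dependent, so the model consumes a window of sensor readings.
import torch
import torch.nn as nn

class JointKinematicsEstimator(nn.Module):
    def __init__(self, n_sensors=4, n_joints=2, hidden=32):  # sizes assumed
        super().__init__()
        self.rnn = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, x):              # x: (batch, time, n_sensors)
        h, _ = self.rnn(x)
        return self.head(h[:, -1])     # joint angles at the latest timestep

# A model this small is plausible within a real-time budget on an
# edge-computing device, in the spirit of the 12 ms figure quoted above.
```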

    Toward a Topography of Cross-Cultural Theatre Praxis

    In this essay we attempt to map out a conceptual framework for analyzing a cluster of related practices subsumed under the broad banner of "cross-cultural theatre". For the purposes of our discussion, cross-cultural theatre encompasses public performance practices characterized by the conjunction of specific cultural resources at the level of narrative content, performance aesthetics, production processes, and/or reception by an interpretive community. The cultural resources at issue may be material or symbolic, taking the form of particular objects or properties, languages, myths, rituals, embodied techniques, training methods, and visual practices - or what James Brandon calls "cultural fragments" (1990:92). Cross-cultural theatre inevitably entails a process of encounter and negotiation between different cultural sensibilities, although the degree to which this is discernible in any performance event will vary considerably depending on the artistic capital brought to a project as well as the location and working processes involved in its development and execution.

    Large AI Models in Health Informatics: Applications, Challenges, and the Future

    Large AI models, or foundation models, are recently emerging models of massive scale in terms of both parameters and training data, the magnitudes of which can reach beyond billions. Once pretrained, large AI models demonstrate impressive performance in various downstream tasks. A prime example is ChatGPT, whose capability has captured people's imagination about the far-reaching influence that large AI models can have and their potential to transform different domains of our lives. In health informatics, the advent of large AI models has brought new paradigms for the design of methodologies. The scale of multi-modal data in the biomedical and health domain has been ever-expanding, especially since the community embraced the era of deep learning, which provides the ground to develop, validate, and advance large AI models for breakthroughs in health-related areas. This article presents a comprehensive review of large AI models, from their background to their applications. We identify seven key sectors in which large AI models are applicable and may have substantial influence: 1) bioinformatics; 2) medical diagnosis; 3) medical imaging; 4) medical informatics; 5) medical education; 6) public health; and 7) medical robotics. We examine their challenges, followed by a critical discussion of potential future directions and pitfalls of large AI models in transforming the field of health informatics. Comment: This article has been accepted for publication in IEEE Journal of Biomedical and Health Informatics

    Egocentric Image Captioning for Privacy-Preserved Passive Dietary Intake Monitoring

    Camera-based passive dietary intake monitoring can continuously capture the eating episodes of a subject, recording rich visual information such as the type and volume of food being consumed, as well as the eating behaviours of the subject. However, no existing method is able to incorporate these visual clues and provide a comprehensive context of dietary intake from passive recording (e.g., whether the subject is sharing food with others, what food the subject is eating, and how much food is left in the bowl). On the other hand, privacy is a major concern when egocentric wearable cameras are used for capturing. In this paper, we propose a privacy-preserved secure solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake based on the captions instead of the original images, reducing the risk of privacy leakage from images. To this end, an egocentric dietary image captioning dataset has been built, consisting of in-the-wild images captured by head-worn and chest-worn cameras during field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness of the proposed architecture and to justify its design for egocentric dietary image captioning. To the best of our knowledge, this is the first work that applies image captioning to dietary intake assessment in real-life settings.
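
    A transformer-based captioner of the kind described typically conditions a token decoder on visual features extracted from the image. The sketch below shows that generic pattern; the dimensions, vocabulary size, and layer counts are placeholders, not the paper's architecture.

```python
# Generic transformer captioning sketch: visual features act as encoder
# memory, and a transformer decoder predicts the caption token by token.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size=10000, d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, visual_feats):
        # tokens: (B, T) caption so far; visual_feats: (B, S, d_model) from a
        # vision backbone (e.g., patch features of the egocentric image).
        tgt = self.embed(tokens)
        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.decoder(tgt, visual_feats, tgt_mask=mask)
        return self.out(h)             # next-token logits per position
```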